
    Personalised exercise recognition towards improved self-management of musculoskeletal disorders.

    Musculoskeletal Disorders (MSD) are a primary contributor to the global disease burden, with increased years lived with disability. Such chronic conditions require self-management, typically in the form of maintaining an active lifestyle while adhering to prescribed exercises. Today, exercise monitoring in fitness applications relies wholly on user input. An effective digital intervention for self-managing MSD should be capable of monitoring, recognising and assessing the performance quality of exercises in real time. Exercise Recognition (ExRec) is the machine learning problem that investigates the automation of exercise monitoring. Multiple challenges arise when implementing high-performing ExRec algorithms for a wide range of exercises performed by people from different demographics. In this thesis, we explore three personalisation challenges. Different sensor combinations can be used to capture exercises, to improve usability and deployability in restricted settings. Accordingly, a recognition algorithm should be adaptable to different sensor combinations. To address this challenge, we investigate the best feature learners for individual sensors, and effective fusion methods that minimise the need for data and very deep architectures. We implement a modular hybrid attention fusion architecture that emphasises significant features and understates noisy features from multiple sensors for each exercise. People perform exercises differently when not supervised; they incorporate personal rhythms and nuances. Accordingly, a recognition algorithm should be able to adapt to different persons. To address the personalised recognition challenge, we investigate how to adapt learned models to new, unseen persons. Key to achieving effective personalisation is the ability to personalise with few data instances. Accordingly, we bring together personalisation methods and advances in meta-learning to introduce a personalised meta-learning methodology. The resulting personalised meta-learners learn to adapt to new end-users with only a few data instances. It is infeasible to design algorithms to recognise all the exercises a physiotherapist might prescribe. Accordingly, the ability to integrate new exercises after deployment is another challenge in ExRec. The challenge of adapting to unseen exercises is known as open-ended recognition. We extend the personalised meta-learning methodology to the open-ended domain, such that an end-user can introduce a new exercise to the model with only a few data instances. Finally, we address the lack of publicly available data and collaborate with health science researchers to curate a heterogeneous multi-modal physiotherapy exercise dataset, MEx. We conduct comprehensive evaluations of the proposed methods using MEx to demonstrate that they successfully address the three ExRec challenges. We also show that our contributions are not restricted to the domain of ExRec, but are applicable to a wide range of activity recognition tasks, by extending the evaluation to other human activity recognition domains.
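    The few-shot personalisation described above can be illustrated with a minimal nearest-prototype sketch: assume a (hypothetical) meta-learned feature extractor is frozen, so adapting to a new end-user only requires averaging a handful of labelled feature vectors per exercise. The function names, exercise labels and two-dimensional features below are illustrative assumptions, not the thesis's actual architecture.

```python
# Toy sketch of few-shot personalisation: class prototypes are built from a
# few support examples supplied by a new end-user; no re-training is needed.
# (Features would come from a pre-trained extractor; here they are made up.)

def prototypes(support):
    """Average the few support examples per class into one prototype each."""
    sums, counts = {}, {}
    for features, label in support:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in vec] for lbl, vec in sums.items()}

def classify(query, protos):
    """Assign the query window to the nearest prototype (squared Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(protos, key=lambda lbl: dist(query, protos[lbl]))

# A new end-user provides two labelled examples per exercise:
support = [([1.0, 0.1], "squat"), ([0.9, 0.2], "squat"),
           ([0.1, 1.0], "bridge"), ([0.2, 0.9], "bridge")]
protos = prototypes(support)
print(classify([0.95, 0.15], protos))  # -> squat
```

The design point this illustrates is that personalisation cost is a few averages, not a gradient-descent run on the end-user's device.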

    A user-centred evaluation of DisCERN: discovering counterfactuals for code vulnerability detection and correction.

    Counterfactual explanations highlight actionable knowledge that helps users understand how a machine learning model's outcome could be altered to a more favourable one. Understanding actionable corrections in source code analysis can be critical to proactively mitigating security attacks caused by known vulnerabilities. In this paper, we present the DisCERN explainer for discovering counterfactuals for code vulnerability correction. Given a vulnerable code segment, DisCERN finds counterfactual (i.e. non-vulnerable) code segments and recommends actionable corrections. DisCERN uses feature attribution knowledge to identify potentially vulnerable code statements. Subsequently, it applies a substitution-focused correction, suggesting suitable fixes by analysing the nearest-unlike neighbour. Overall, DisCERN aims to identify vulnerabilities and correct them while preserving both the code syntax and the original functionality of the code. A user study evaluated the utility of counterfactuals for vulnerability detection and correction compared to more commonly used feature attribution explainers. The study revealed that counterfactuals foster positive shifts in mental models, effectively guiding users toward making vulnerability corrections. Furthermore, counterfactuals significantly reduced the cognitive load when detecting and correcting vulnerabilities in complex code segments. Despite these benefits, the user study showed that feature attribution explanations are still more widely accepted than counterfactuals, possibly due to the greater familiarity with the former and the novelty of the latter. These findings encourage further research and development into counterfactual explanations, as they demonstrate the potential for acceptance over time among developers as a reliable resource for both coding and training.
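    The nearest-unlike-neighbour substitution idea can be sketched on toy tabular features (not real code segments). The classifier, dataset and attribution scores below are invented for illustration; DisCERN's actual pipeline operates on code representations.

```python
# Illustrative sketch of nearest-unlike-neighbour (NUN) counterfactuals:
# copy feature values from the closest differently-labelled instance,
# most-attributed feature first, until the prediction flips.

def nearest_unlike_neighbour(query, dataset, query_label):
    """Closest training instance with a different class label."""
    unlike = [(f, l) for f, l in dataset if l != query_label]
    return min(unlike, key=lambda fl: sum((a - b) ** 2
                                          for a, b in zip(query, fl[0])))[0]

def counterfactual(query, dataset, query_label, predict, attribution):
    """Substitute feature values from the NUN, most-attributed first,
    until the model's prediction changes to the favourable class."""
    nun = nearest_unlike_neighbour(query, dataset, query_label)
    cf = list(query)
    for i in sorted(range(len(query)), key=lambda i: -attribution[i]):
        cf[i] = nun[i]
        if predict(cf) != query_label:
            return cf
    return cf

# Toy classifier: "vuln" whenever the first feature exceeds 0.5.
predict = lambda f: "vuln" if f[0] > 0.5 else "safe"
data = [([0.9, 0.3], "vuln"), ([0.2, 0.4], "safe")]
print(counterfactual([0.8, 0.3], data, "vuln", predict, [1.0, 0.1]))  # -> [0.2, 0.3]
```

Attribution ordering matters: substituting the most-attributed feature first tends to flip the outcome with the fewest edits, which is the "actionable" part of the explanation.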

    Reasoning with multi-modal sensor streams for m-health applications.

    Musculoskeletal Disorders have a long-term impact on individuals as well as on the community. They require self-management, typically in the form of maintaining an active lifestyle that adheres to prescribed exercise regimes. In the recent past, m-health applications have gained popularity through the gamification of physical activity monitoring, with a positive impact on general health and well-being. However, maintaining a regular exercise routine with correct execution needs more sophisticated human movement recognition than monitoring ambulatory activities. In this research we propose a digital intervention that can capture, recognise and evaluate exercises in real time, with a view to supporting exercise self-management plans. We plan to compile a heterogeneous multi-sensor dataset for exercises; we will then improve upon state-of-the-art machine learning models and implement reasoning methods to recognise exercises and evaluate performance quality.

    Heterogeneous multi-modal sensor fusion with hybrid attention for exercise recognition.

    Exercise adherence is a key component of digital behaviour change interventions for the self-management of musculoskeletal pain. Automated monitoring of exercise adherence requires sensors that can capture patients performing exercises and Machine Learning (ML) algorithms that can recognise exercises. In contrast to ambulatory activities, which are recognisable from wrist accelerometer data, exercises require multiple sensor modalities because of the complexity of the movements and the settings involved. Exercise Recognition (ExR) poses many challenges to ML researchers due to the heterogeneity of the sensor modalities (e.g. image/video streams, wearables, pressure mats). We recently published MEx, a benchmark dataset for ExR, to promote the study of new and transferable Human Activity Recognition (HAR) methods to improve ExR, and benchmarked state-of-the-art ML algorithms on 4 modalities. The results highlighted the need for fusion methods that unite the individual strengths of modalities. In this paper we explore fusion methods with a focus on attention and propose a novel multi-modal hybrid attention fusion architecture, mHAF, for ExR. We achieve the best performance of 96.24% (F1-measure) with a modality combination of a pressure mat, a depth camera and an accelerometer on the thigh. mHAF significantly outperforms multiple baselines, and the contributions of its model components are verified with an ablation study. The benefits of attention fusion are clearly demonstrated by visualising attention weights, showing how mHAF learns feature importance and modality combinations suited to different exercise classes. We highlight the importance of improving deployability and minimising obtrusiveness by exploring the best-performing 2- and 3-modality combinations.
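    The core mechanism, weighting each modality's features by a learned relevance score before combining them, can be sketched in a few lines. This is a toy stand-in for mHAF, not its architecture: the scorer, feature values and dimensions below are assumptions for illustration only.

```python
import math

# Minimal sketch of attention-style fusion: softmax-normalised scores weight
# each modality's feature vector, so noisy modalities are understated.

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_fuse(modality_features, scorer):
    """Fuse per-modality feature vectors into one, returning the fused
    vector and the attention weights used."""
    weights = softmax([scorer(f) for f in modality_features])
    dim = len(modality_features[0])
    fused = [sum(w * f[i] for w, f in zip(weights, modality_features))
             for i in range(dim)]
    return fused, weights

# Three modalities (e.g. pressure mat, depth camera, thigh accelerometer);
# a toy scorer that favours high-energy features (a learned net in practice).
feats = [[0.9, 0.8], [0.1, 0.2], [0.5, 0.4]]
fused, w = attention_fuse(feats, scorer=lambda f: sum(f))
```

Inspecting `w` is the toy analogue of the attention-weight visualisations in the paper: it reveals which modality dominated the fused representation.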

    A knowledge-light approach to personalised and open-ended human activity recognition.

    Human Activity Recognition (HAR) is a core component of clinical decision support systems that rely on activity monitoring for the self-management of chronic conditions such as Musculoskeletal Disorders. The deployment success of such applications depends in part on their ability to adapt to individual variations in human movement and to facilitate a range of human activity classes. Research in personalised HAR aims to learn models that are sensitive to the subtle nuances in human movement, whilst Open-ended HAR learns models that can recognise activity classes outside the pre-defined set available at training time. Current approaches to personalised HAR impose a data collection burden on the end user, whilst Open-ended HAR algorithms are heavily reliant on intermediary-level class descriptions. Instead of these 'knowledge-intensive' HAR algorithms, in this article we propose a 'knowledge-light' method. Specifically, we show how, by using a few seconds of raw sensor data obtained through micro-interactions with the end user, we can effectively personalise HAR models and transfer recognition functionality to new activities with zero re-training of the model after deployment. We introduce a Personalised Open-ended HAR algorithm, MNZ, a user-context-aware Matching Network architecture, and evaluate it on 3 HAR data sources. Performance results show up to a 48.9% improvement with personalisation and up to an 18.3% improvement compared to the most common 'knowledge-intensive' Open-ended HAR algorithms.
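    The Matching Network idea behind this kind of zero-re-training recognition can be sketched as a softmax attention kernel over a small support set: a query is labelled by similarity-weighted votes, so introducing a new activity class only means adding examples to the support set. The activity labels and feature vectors below are illustrative assumptions, not MNZ itself.

```python
import math

# Toy Matching Network-style classifier: attention over a few labelled
# support examples; no model weights change when new classes are added.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def matching_classify(query, support):
    """Softmax over cosine similarities gives per-example attention;
    votes are accumulated per class and the heaviest class wins."""
    sims = [cosine(query, f) for f, _ in support]
    exps = [math.exp(s) for s in sims]
    total = sum(exps)
    votes = {}
    for (_, label), e in zip(support, exps):
        votes[label] = votes.get(label, 0.0) + e / total
    return max(votes, key=votes.get)

# A few seconds of labelled data per activity form the support set:
support = [([1.0, 0.0], "walk"), ([0.0, 1.0], "squat"), ([0.7, 0.7], "lunge")]
print(matching_classify([0.9, 0.1], support))  # -> walk
```

Open-ended recognition falls out for free: appending `([..], "new_activity")` pairs to `support` extends the label set with zero re-training.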

    Learning to recognise exercises for the self-management of low back pain.

    Globally, low back pain (LBP) is one of the top three contributors to years lived with disability. Self-management with an active lifestyle is the cornerstone of preventing and managing LBP. Digital interventions have been introduced in the recent past to improve and reinforce self-management; regular exercises are a core component, and these interventions rely on self-reporting to keep track of exercises performed. This data directly influences the recommendations made by the digital intervention, so accurate and reliable reporting is fundamental to the success of the intervention. In addition, performing exercises with precision is important, yet current systems are unable to provide the guidance required. The main challenge to implementing an end-to-end solution is the lack of public sensor-rich datasets with which to implement Machine Learning algorithms for Exercise Recognition (ExR) and qualitative analysis. Accordingly, we introduce the ExR benchmark dataset "MEx", which we have shared publicly to encourage further research. In this paper we benchmark state-of-the-art classification algorithms with deep and shallow architectures on each sensor and achieve performance of up to 90.2%. We assess the scope of each sensor in capturing exercise movements with confusion matrices and highlight the most suitable sensors for deployment considering performance vs. obtrusiveness.

    Human activity recognition with deep metric learners.

    Establishing a strong foundation for similarity-based retrieval is a top priority in Case-Based Reasoning (CBR) systems. Deep Metric Learners (DMLs) are a group of neural network architectures which learn to optimise case representations for similarity-based retrieval by training upon multiple cases simultaneously to incorporate relationship knowledge. This is particularly important in the Human Activity Recognition (HAR) domain, where understanding similarity between cases supports aspects such as personalisation and open-ended HAR. In this paper, we perform a short review of three DMLs and compare their performance across three HAR datasets. Our findings support research indicating that DMLs are valuable for improving similarity-based retrieval, and suggest that considering more cases simultaneously offers better performance.
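    A representative DML objective, the triplet loss, shows how training on multiple cases simultaneously encodes relationship knowledge: an anchor is pulled towards a same-class positive and pushed from a different-class negative by at least a margin. This is a generic sketch of one common metric-learning loss, not necessarily one of the three architectures reviewed; the embeddings below are made up.

```python
# Toy triplet loss on hand-written embeddings (no training loop): the hinge
# is zero once the positive is closer than the negative by the margin.

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """max(0, d(a, p) - d(a, n) + margin): penalise triplets where the
    same-class positive is not sufficiently closer than the negative."""
    return max(0.0, sq_dist(anchor, positive) - sq_dist(anchor, negative) + margin)

# A well-separated triplet incurs no loss; a confusable one does.
print(triplet_loss([0.0, 0.0], [0.1, 0.0], [2.0, 0.0]))  # -> 0.0
print(triplet_loss([0.0, 0.0], [1.0, 0.0], [1.1, 0.0]))  # positive loss
```

Because the loss compares cases in relation to each other rather than in isolation, the learned embedding directly optimises the nearest-neighbour retrieval that CBR systems depend on.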